
Add terraform-policy-controls plugin with terraform-runtask skill #69

Open
srlynch1 wants to merge 3 commits into hashicorp:main from srlynch1:run-task

Conversation

@srlynch1
Contributor

@srlynch1 srlynch1 commented Apr 21, 2026

Summary

  • Adds a new terraform-policy-controls plugin under terraform/policy-controls/ with a terraform-runtask skill that fetches HCP Terraform/TFE run task stages, results, and outcomes via the REST API — filling a gap not covered by the Terraform MCP server.
  • Registers the plugin in the top-level .claude-plugin/marketplace.json so it's discoverable alongside the other HashiCorp plugins.

The skill ships a shell script (scripts/get-run-task-results.sh) that authenticates with TFE_TOKEN, sideloads task results via include=task_results, then fetches each outcome's HTML body for rich reporting. The SKILL.md guides Claude to present results in a four-tier structure (summary → stages → task results → outcomes) and synthesize actionable findings from the HTML bodies.
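The sideloading step described above can be sketched locally with jq. The snippet below parses a hypothetical payload shaped like a task-stages response with `include=task_results`; the field names (`task-name`, `message`) and the overall response shape are illustrative assumptions, not a verbatim copy of the script's output or the TFE API.

```shell
#!/usr/bin/env sh
# Hypothetical sample shaped like a TFE task-stages response with
# sideloaded task results (include=task_results). Field names are
# illustrative and may differ from the real API.
payload='{
  "data": [
    { "id": "ts-1", "attributes": { "stage": "post_plan", "status": "passed" } }
  ],
  "included": [
    { "id": "taskres-1", "type": "task-results",
      "attributes": { "task-name": "policy-check", "status": "passed",
                      "message": "0 violations" } }
  ]
}'

# Summarize each sideloaded task result as "name: status (message)",
# the kind of per-result line the tiered presentation builds on.
printf '%s' "$payload" | jq -r \
  '.included[] | select(.type == "task-results")
   | "\(.attributes["task-name"]): \(.attributes.status) (\(.attributes.message))"'
```

In the real script the payload would come from `curl` against the run's task-stages endpoint with an `Authorization: Bearer $TFE_TOKEN` header rather than a here-string.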

Test plan

  • jq . parses terraform/policy-controls/.claude-plugin/plugin.json and .claude-plugin/marketplace.json without error
  • ./terraform/policy-controls/skills/terraform-runtask/scripts/get-run-task-results.sh run-<id> returns valid JSON with task_stages, summary, and per-outcome body_html content against a real HCP Terraform run
  • Script exits cleanly on: missing TFE_TOKEN, invalid run ID format, 401/404 responses
  • URL input form (.../runs/run-abc123) correctly extracts run ID and auto-detects TFE_HOSTNAME
  • Plugin installs from the marketplace and the terraform-runtask skill activates on relevant prompts ("check the run tasks", "get task results for run-xxx")
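The URL-handling behavior exercised by the test plan can be sketched as a small shell helper. `extract_run_id` is a hypothetical name, and the accepted input forms and the `app.terraform.io` fallback are assumptions inferred from the test plan, not the actual script's logic.

```shell
#!/usr/bin/env sh
# Accept either a bare run ID ("run-abc123") or a full HCP Terraform
# run URL, then emit the detected hostname and the run ID.
extract_run_id() {
  input="$1"
  case "$input" in
    https://*/runs/run-*)
      # URL form: take the hostname between "https://" and the first "/",
      # and the run ID after the final "/runs/run-".
      host="${input#https://}"; host="${host%%/*}"
      run_id="run-${input##*/runs/run-}"
      ;;
    run-*)
      # Bare ID form: fall back to TFE_HOSTNAME or the SaaS default.
      host="${TFE_HOSTNAME:-app.terraform.io}"
      run_id="$input"
      ;;
    *)
      echo "error: expected run-<id> or a .../runs/run-<id> URL" >&2
      return 1
      ;;
  esac
  printf '%s %s\n' "$host" "$run_id"
}

extract_run_id "https://app.terraform.io/app/my-org/workspaces/demo/runs/run-abc123"
extract_run_id "run-xyz789"
```

The rejection branch mirrors the "invalid run ID format" exit condition in the test plan: anything that is neither a `run-` ID nor a run URL returns non-zero with a message on stderr.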

Retrieves HCP Terraform run task stages, results, and outcomes via the
TFE REST API to fill gaps not covered by the MCP terraform tools.
Add the plugin manifest and marketplace entry so the new
terraform-runtask skill is installable alongside the other
HashiCorp plugins.
@srlynch1 srlynch1 requested a review from a team as a code owner April 21, 2026 01:13
@github-actions

github-actions Bot commented Apr 21, 2026

Tessl Skill Review Results

Skill                                              | Status    | Review Score | Change
terraform/policy-controls/skills/terraform-runtask | ✅ PASSED | 92%          |

Detailed Review

terraform/policy-controls/skills/terraform-runtask — 92% (PASSED)
  Description: 100%
    specificity: 3/3 - The description lists specific concrete actions: 'Retrieve and display HCP Terraform Enterprise run task results for a given run.' It clearly names the domain (HCP Terraform Enterprise) and the specific operation (retrieving and displaying run task results).
    trigger_term_quality: 3/3 - Excellent coverage of natural trigger terms including 'run task results', 'run task checks', 'task stage statuses', 'check the run tasks', 'what did the run tasks say', 'show run task results', 'get task results for run-xxx', and 'run task outcomes'. These are phrases users would naturally say.
    completeness: 3/3 - Clearly answers both 'what' (retrieve and display HCP Terraform Enterprise run task results) and 'when' (explicit 'Use this skill whenever...' clause with detailed trigger phrases and scenarios).
    distinctiveness_conflict_risk: 3/3 - Highly distinctive — it targets a very specific niche (HCP Terraform Enterprise run task results) with domain-specific terminology like 'run-xxx', 'task stage statuses', and 'Terraform Cloud/Enterprise run'. Unlikely to conflict with other skills.

    Assessment: This is a strong skill description that clearly defines its purpose, provides explicit trigger guidance, and uses domain-specific terminology that makes it highly distinguishable. It follows the recommended pattern of stating what it does followed by when to use it, with concrete example phrases that users would naturally say.

  Content: 83%
    conciseness: 2/3 - The skill is fairly detailed and well-structured, but includes some unnecessary verbosity — e.g., the extended explanation of three empty scenarios could be more compact, and the 'Reading the JSON output' section documents field names that Claude could infer from the JSON itself. The Tier 4 example is helpful but lengthy. Overall mostly efficient with some bloat.
    actionability: 3/3 - Provides a concrete, executable bash command with clear environment variable requirements, specific JSON field paths, exact markdown formatting templates for each tier, and detailed examples of expected output. The guidance is specific enough to be directly followed without ambiguity.
    workflow_clarity: 3/3 - The workflow is clearly sequenced (identify run → fetch data → present results → enrich with MCP context) with explicit error handling, edge case coverage, and clear instructions for each step. The tiered presentation model provides a structured approach with validation through the summary counts. The enrichment step with MCP context adds a verification/completeness checkpoint.
    progressive_disclosure: 2/3 - The content is well-organized with clear sections and headers, but it's a monolithic document that could benefit from splitting the JSON field reference and presentation templates into separate files. The script is properly referenced externally, but the detailed field documentation and edge case handling inline makes the file quite long. No bundle files were provided to verify the script reference.

    Assessment: This is a well-crafted skill with strong actionability and workflow clarity — it provides concrete commands, detailed output formatting templates, and thorough edge case handling. Its main weakness is moderate verbosity: the JSON field reference section and the extended edge case explanations could be more concise or split into reference files. The tiered presentation model and actionable insights synthesis (Tier 4) add genuine value beyond what Claude would produce by default.

Suggestions:

  • Move the 'Reading the JSON output' field reference into a separate REFERENCE.md file and link to it, reducing the main skill's token footprint while preserving the detail for when it's needed.
  • Tighten the edge case section — the three scenarios could be presented as a compact table or bullet list rather than verbose paragraphs with repeated formatting examples.

Checks: frontmatter validity, required fields, body structure, examples, line count.
Review score is informational — not used for pass/fail gating.

